
    Towards a High Quality Real-Time Graphics Pipeline

    Modern graphics hardware pipelines create photorealistic images with high geometric complexity in real time. The quality is constantly improving, and advanced techniques from feature film visual effects, such as high dynamic range images and support for higher-order surface primitives, have recently been adopted. Visual effect techniques have large computational costs and significant memory bandwidth usage. In this thesis, we identify three problem areas and propose new algorithms that increase the performance of a set of computer graphics techniques. Our main focus is on efficient algorithms for the real-time graphics pipeline, but parts of our research are equally applicable to offline rendering.

    Our first focus is texture compression, which is a technique to reduce memory bandwidth usage. The core idea is to store images in small compressed blocks which are sent over the memory bus and are decompressed on the fly when accessed. We present compression algorithms for two types of texture formats. High dynamic range images capture environment lighting with luminance differences over a wide intensity range. Normal maps store perturbation vectors for local surface normals, and give the illusion of high geometric surface detail. Our compression formats are tailored to these texture types and have compression ratios of 6:1, high visual fidelity, and low-cost decompression logic.

    Our second focus is tessellation culling. Culling is a commonly used technique in computer graphics for removing work that does not contribute to the final image, such as completely hidden geometry. By discarding rendering primitives from further processing, substantial arithmetic computations and memory bandwidth can be saved. Modern graphics processing units include flexible tessellation stages, where rendering primitives are subdivided for increased geometric detail. Images with highly detailed models can be synthesized, but the incurred cost is significant. We have devised a simple remapping technique that allows for better tessellation distribution in screen space. Furthermore, we present programmable tessellation culling, where bounding volumes for displaced geometry are computed and used to conservatively test if a primitive can be discarded before tessellation. We introduce a general tessellation culling framework, and an optimized algorithm for rendering of displaced Bézier patches, which is expected to be a common use case for graphics hardware tessellation.

    Our third and final focus is forward-looking, and relates to efficient algorithms for stochastic rasterization, a rendering technique where camera effects such as depth of field and motion blur can be faithfully simulated. We extend a graphics pipeline with stochastic rasterization in spatio-temporal space and show that stochastic motion blur can be rendered with rather modest pipeline modifications. Furthermore, backface culling algorithms for motion blur and depth of field rendering are presented, which are directly applicable to stochastic rasterization. Hopefully, our work in this field brings us closer to high-quality real-time stochastic rendering.
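
    To make the block-based decompression idea above concrete, here is a minimal C++ sketch of the addressing logic shared by such formats. The codec itself is a hypothetical toy (a 4x4-texel block stored in 64 bits, i.e., 4 bits per texel, which matches the 6:1 ratio for 24-bit RGB); the thesis' actual HDR and normal-map formats use more elaborate per-block encodings.

        #include <cstdint>
        #include <vector>

        struct Rgb8 { uint8_t r, g, b; };

        // Toy decoder: a 24-bit base color plus 16 x 2-bit per-texel luminance
        // modifiers. Real formats (ETC, BC, or the thesis' HDR/normal-map
        // codecs) are more elaborate; only the structure matters here.
        static void decodeBlock(uint64_t bits, Rgb8 out[16]) {
            static const int kMod[4] = { -24, -8, 8, 24 };
            auto clamp8 = [](int v) { return uint8_t(v < 0 ? 0 : (v > 255 ? 255 : v)); };
            const int r = int((bits >> 56) & 0xFF);
            const int g = int((bits >> 48) & 0xFF);
            const int b = int((bits >> 40) & 0xFF);
            for (int i = 0; i < 16; ++i) {
                const int m = kMod[(bits >> (2 * i)) & 0x3];
                out[i] = { clamp8(r + m), clamp8(g + m), clamp8(b + m) };
            }
        }

        // A texel fetch maps (x, y) to its 4x4 block, decodes that block on
        // the fly, and selects the texel within it; only one 64-bit block
        // crosses the memory bus per fetch.
        Rgb8 fetchTexel(const std::vector<uint64_t>& blocks, int texWidth, int x, int y) {
            const int blocksPerRow = texWidth / 4;  // width assumed a multiple of 4
            const uint64_t bits = blocks[(y / 4) * blocksPerRow + (x / 4)];
            Rgb8 texels[16];
            decodeBlock(bits, texels);
            return texels[(y % 4) * 4 + (x % 4)];
        }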
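    The pre-tessellation culling test can likewise be sketched. The C++ below is an assumed, simplified variant that only performs view-frustum culling of a displaced bicubic Bézier patch: the displaced surface is bounded by the control points' AABB inflated by the maximum displacement (valid because the base patch lies in the control points' convex hull), and the patch is discarded if that bound lies outside any frustum plane. The thesis' framework is programmable and more general, also covering backface and occlusion tests with tighter bounds.

        #include <algorithm>

        struct Vec3  { float x, y, z; };
        struct Plane { Vec3 n; float d; };  // points with n.p + d >= 0 are inside

        // Conservative cull of a displaced bicubic patch (16 control points).
        // maxDisp bounds the displacement magnitude along the (unit) normal.
        bool cullDisplacedPatch(const Vec3 cp[16], float maxDisp, const Plane frustum[6]) {
            Vec3 lo = cp[0], hi = cp[0];
            for (int i = 1; i < 16; ++i) {
                lo.x = std::min(lo.x, cp[i].x); hi.x = std::max(hi.x, cp[i].x);
                lo.y = std::min(lo.y, cp[i].y); hi.y = std::max(hi.y, cp[i].y);
                lo.z = std::min(lo.z, cp[i].z); hi.z = std::max(hi.z, cp[i].z);
            }
            lo.x -= maxDisp; lo.y -= maxDisp; lo.z -= maxDisp;
            hi.x += maxDisp; hi.y += maxDisp; hi.z += maxDisp;

            // The patch can be discarded if its bound is fully outside any
            // plane: test the AABB corner farthest along the plane normal.
            for (int i = 0; i < 6; ++i) {
                const Plane& pl = frustum[i];
                const Vec3 p = { pl.n.x >= 0 ? hi.x : lo.x,
                                 pl.n.y >= 0 ? hi.y : lo.y,
                                 pl.n.z >= 0 ? hi.z : lo.z };
                if (pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d < 0)
                    return true;  // safe to cull before tessellation
            }
            return false;
        }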

    Efficient multi-view ray tracing using edge detection and shader reuse

    Stereoscopic rendering and 3D stereo displays are quickly becoming mainstream. The natural extension is autostereoscopic multi-view displays, which, by the use of parallax barriers or lenticular lenses, can accommodate many simultaneous viewers without the need for active or passive glasses. As these displays, for the foreseeable future, will support only a rather limited number of views, there is a need for high-quality interperspective antialiasing. We present a specialized algorithm for efficient multi-view image generation from a camera line using ray tracing, which builds on previous methods for multi-dimensional adaptive sampling and reconstruction of light fields. We introduce multi-view silhouette edges to detect sharp geometrical discontinuities in the radiance function. These are used to significantly improve the quality of the reconstruction. In addition, we exploit shader coherence by computing analytical visibility between shading points and the camera line, and by sharing shading computations over the camera line.
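
    As an illustration of the shader-reuse idea, here is a minimal, hypothetical C++ sketch: given the sub-intervals of the camera line that an analytic visibility query reports as seeing a shading point, a view-independent shading result (e.g., the diffuse term) is written to every discrete view in those intervals. The names (shareShading, Interval) are invented for the sketch, and both per-view reprojection to the correct pixel and the view-dependent shading path are omitted.

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // A visible sub-interval of the camera line, parameterized over [0,1].
        struct Interval { float t0, t1; };

        // Splat one view-independent shading result to every discrete view on
        // the camera line that sees the shading point, so the computation is
        // shared over the camera line instead of repeated per view.
        void shareShading(const Vec3& radiance,
                          const std::vector<Interval>& visible,
                          int numViews,
                          std::vector<Vec3>& viewRadiance) {  // one slot per view
            for (const Interval& iv : visible) {
                const int first = int(std::ceil(iv.t0 * (numViews - 1)));
                const int last  = int(std::floor(iv.t1 * (numViews - 1)));
                for (int v = first; v <= last; ++v)
                    viewRadiance[v] = radiance;  // reuse the shading computation
            }
        }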

    Extracting Triangular 3D Models, Materials, and Lighting From Images

    We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified. We leverage recent work in differentiable rendering: coordinate-based networks to compactly represent volumetric texturing, alongside differentiable marching tetrahedrons to enable gradient-based optimization directly on the surface mesh. Finally, we introduce a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting. Experiments show our extracted models used in advanced scene editing, material decomposition, and high-quality view interpolation, all running at interactive rates in triangle-based renderers (rasterizers and path tracers). Project website: https://nvlabs.github.io/nvdiffrec/
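
    For context, the split sum approximation referenced above is commonly formulated (following Karis's real-time image-based lighting work) by factoring the Monte Carlo estimate of the specular integral into two sums that can be precomputed independently; the paper's contribution is a differentiable version of such a factorization. Schematically:

        \frac{1}{N}\sum_{k=1}^{N} \frac{L_i(l_k)\, f(l_k, v)\,(n \cdot l_k)}{p(l_k)}
        \;\approx\;
        \left( \frac{1}{N}\sum_{k=1}^{N} L_i(l_k) \right)
        \left( \frac{1}{N}\sum_{k=1}^{N} \frac{f(l_k, v)\,(n \cdot l_k)}{p(l_k)} \right)

    Here the light directions l_k are importance-sampled from the normal distribution function for the surface roughness; the first factor is the prefiltered environment map (typically a mip chain indexed by roughness) and the second depends only on roughness and n·v, so it fits in a 2D lookup table.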

    Backface culling for motion blur and depth of field

    For triangles with linear vertex motion, common practice is to backface cull a triangle if it is backfacing at both the start and end of the motion. However, this is not conservative. We derive conservative tests that guarantee that a moving triangle is backfacing over an entire time interval and over the area of a lens. In addition, we present tests for the special cases of only motion blur and only depth of field. Our techniques apply to real-time and offline rendering, and to both stochastic point sampling and analytical visibility methods. The rendering errors introduced by the previous technique are easy to detect for large defocus blur, but in the majority of cases they are hard to spot. We conclude that our tests are needed when guaranteed artifact-free images are required. Finally, as a side result, we derive time-continuous Bézier edge equations.
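
    To illustrate why an interval test is possible, consider a perspective camera at the origin and linear vertex motion p_i(t) = (1 - t) q_i + t r_i. The triangle is backfacing when det[p0(t), p1(t), p2(t)] has a particular sign (taken as positive below, under an assumed winding convention), and by multilinearity this determinant is a cubic polynomial in t. Writing it in Bernstein (Bézier) form yields a conservative test via the convex hull property; the C++ sketch below is a simplified reading in that spirit, not the paper's exact formulation.

        struct Vec3 { float x, y, z; };

        static float det3(const Vec3& a, const Vec3& b, const Vec3& c) {
            return a.x * (b.y * c.z - b.z * c.y)
                 - a.y * (b.x * c.z - b.z * c.x)
                 + a.z * (b.x * c.y - b.y * c.x);
        }

        // Vertices move linearly: p_i(t) = (1-t) q_i + t r_i, camera at the
        // origin. d(t) = det[p0(t), p1(t), p2(t)] is cubic in t; its Bernstein
        // coefficients follow from multilinearity of the determinant.
        bool backfacingOverInterval(const Vec3 q[3], const Vec3 r[3]) {
            const float e0 = det3(q[0], q[1], q[2]);
            const float e1 = (det3(r[0], q[1], q[2]) + det3(q[0], r[1], q[2])
                            + det3(q[0], q[1], r[2])) / 3.0f;
            const float e2 = (det3(r[0], r[1], q[2]) + det3(r[0], q[1], r[2])
                            + det3(q[0], r[1], r[2])) / 3.0f;
            const float e3 = det3(r[0], r[1], r[2]);
            // Convex hull property: if all Bézier coefficients are positive,
            // d(t) > 0 on [0,1], so the triangle is backfacing for the whole
            // exposure and can safely be culled.
            return e0 > 0 && e1 > 0 && e2 > 0 && e3 > 0;
        }

    The test is conservative in one direction only: a positive result guarantees the triangle is backfacing throughout [0,1], but some triangles that are in fact always backfacing may still be kept.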

    Deep Coherent Ray Tracing

    [Figure 1: The example scenes used for evaluating our reordering heuristics and coherence measures: fairy, sponza, oldTree, newTree, and dragon. All materials in the scenes are reflective in order to study the behavior of secondary rays. Fairy is an example of the "teapot in a stadium" problem, with a small detailed model in a simple large environment. Sponza is a standard benchmark model. Two tree scenes, with and without leaves (newTree and oldTree, respectively), ensure complex traversal paths for secondary rays. Dragon is a scene with four reflective Stanford Dragons in a Cornell box.]
    Tracing secondary rays, such as reflection, refraction and shadow rays, can often be the most costly step in a modern real-time ray tracer. In this paper, we examine this problem by using suitable ray coherence measures and present a thorough evaluation of different reordering heuristics for secondary rays. We also present a simple system design for more coherent scene traversal by caching secondary rays and using sorted packet tracing. Although the results are only slightly incremental to current research, we believe this study is an interesting contribution for further research in the field.
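
    As one concrete example of the kind of reordering heuristic evaluated above, secondary rays can be sorted by a coherence key built from the direction octant and a Morton code of the quantized origin, so rays that start near each other and point roughly the same way are traced together. This C++ sketch is a generic, assumed heuristic for illustration, not necessarily one of the paper's exact variants:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Ray  { Vec3 o, d; };

        // Spread the low 10 bits of v so they occupy every third bit position.
        static uint32_t part1By2(uint32_t v) {
            v &= 0x3FF;
            v = (v | (v << 16)) & 0x030000FF;
            v = (v | (v << 8))  & 0x0300F00F;
            v = (v | (v << 4))  & 0x030C30C3;
            v = (v | (v << 2))  & 0x09249249;
            return v;
        }

        // Sort key: direction octant in the high bits, 30-bit Morton code of
        // the quantized origin below it.
        static uint64_t coherenceKey(const Ray& r, const Vec3& sceneMin, const Vec3& sceneExtent) {
            const uint32_t octant = (r.d.x > 0 ? 1u : 0u)
                                  | (r.d.y > 0 ? 2u : 0u)
                                  | (r.d.z > 0 ? 4u : 0u);
            auto q = [](float p, float lo, float ext) {
                float t = (p - lo) / ext;
                t = t < 0.f ? 0.f : (t > 1.f ? 1.f : t);
                return uint32_t(t * 1023.f);
            };
            const uint32_t morton = (part1By2(q(r.o.x, sceneMin.x, sceneExtent.x)) << 2)
                                  | (part1By2(q(r.o.y, sceneMin.y, sceneExtent.y)) << 1)
                                  |  part1By2(q(r.o.z, sceneMin.z, sceneExtent.z));
            return (uint64_t(octant) << 30) | morton;
        }

        // Reorder a batch of cached secondary rays before packet tracing.
        void reorderRays(std::vector<Ray>& rays, const Vec3& sceneMin, const Vec3& sceneExtent) {
            std::sort(rays.begin(), rays.end(), [&](const Ray& a, const Ray& b) {
                return coherenceKey(a, sceneMin, sceneExtent)
                     < coherenceKey(b, sceneMin, sceneExtent);
            });
        }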